A Cluster Computing Implementation of a Particle Growth Simulator

Authors

  • Kenneth P. Williams
  • Shirley A. Williams
Abstract

Clusters of computers can be used together to provide a powerful computing resource. This paper presents a configurable system capable of modelling a variety of particle growth mechanisms. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take a considerable time to execute on conventional workstations. The repetitive nature of Monte Carlo simulations means there is much implicit parallelism which is suitable for exploitation across a distributed-memory system. PVM (Parallel Virtual Machine) [1] constructs add parallelism to C programs by providing a set of library routines offering synchronization, consensus, asynchronous message-passing and broadcast communication. By spreading the work of the simulation across a cluster of workstations, the elapsed execution time can be greatly reduced. The implementation described is executed across a dynamically reconfigurable campus-wide network of SUN-4 workstations operating within a multi-user domain.

Particle crystal growth can be modelled by a Monte Carlo simulation of the polynuclear growth mechanism [2]. In this paper we show how a computer simulation of four-dimensional nucleation can be spread across a cluster of workstations to allow a fast and efficient investigation of growth rate and volume occupation for a variety of nucleation sites. The growth of a particle can be modelled in terms of the time taken by a growing particle to cross a test point. Our aim was to develop a method to simulate the growth of catalyst particles. Within a grid, a number of random test points and nucleation sites are set, and the time to cover increasing fractions of the test points by the growth from the nucleation sites is calculated. By repeating this process many times and averaging the time taken to occupy percentages of the volume, accurate simulations can be achieved. The growth rate can then be calculated from the simulated volume occupied.
The number of nucleation points, their growth rate, and the area or volume will vary according to the growth kinetics that are to be modelled. Similarly, edge effects may be dealt with in a variety of manners, including wrap-around and windowing. These ideas also generalise to a three-dimensional growth-space, where the terms surface and coverage are replaced by the concepts of volume and occupation respectively. None of these refinements, however, affects the algorithm described.

The need to perform thousands of calculations meant that each simulation was very time consuming, and so we decided to spread the calculation across a cluster of workstations. There was ample potential for parallelism within the algorithm. For instance, it would be possible to calculate the growth of each nucleation site in parallel; however, this would have required a great amount of data passing between processors, a very slow approach across a network. We decided instead to replicate the simulation of n nucleation points across a number of workstations, with each slave node running the algorithm and all slaves passing their results back to a master for averaging. For example, each of twenty slaves may calculate the average result of five hundred simulations and pass its results back to the master, which could then average the results across the twenty sets returned. The total effect would be the same as running the simulation ten thousand times.

If each of the slaves uses the same random number generator they will produce the same results, unless each slave starts the random number generator with a different seed. To provide each slave with a different seed, we had the master generate a set of random seeds using a simple random number generator (rand) and then pass the seeds to the slaves. Each slave uses its own seed to start a more sophisticated random number generator (lrand) at a personalised point.
Thus the parallel algorithm consists of a single master and a number of identical slaves. The algorithm was implemented in C, and PVM constructs were added to enable the program to be run across a cluster of workstations. The program was tested across the network, and the elapsed time from start to finish of execution was recorded. This allowed us to compare the time the user had to wait to receive results for differing cluster configurations.

Results for fifty nucleation points show the elapsed time taken to achieve 3200 iterations of the basic algorithm, for example two slaves performing 1600 iterations each or twenty slaves performing 160 iterations each. The timings for no slaves were achieved by creating a sequential version of the program containing the master and slave code and removing the calls to PVM. Monitoring the execution results showed that slow times were in general due to one computer in the cluster taking a long time to calculate its portion of the work, usually because another user was performing computationally intensive work on that particular machine. However, such problems are experienced in any system where more than one user is allowed on a machine. Other problems came when, due to failures, there were fewer machines in the cluster than the number of slaves required; this led to one or more machines taking on double the workload.

The results also indicate that the speed-up achieved is almost linear. A job that would have taken approximately twelve minutes on a single workstation can be completed in one minute by spreading the workload across twenty workstations. These results are presented in graphical format.

Future work will include modelling of particles with variable growth rates and also of varying shapes (e.g., square and rectangular crystals); we also plan to replicate edge effects of surfaces where the growth of two particles overlaps. The current approach relies on the programmer determining how many iterations are necessary.
We intend to investigate ways in which we can vary the number of runs until the result lies within an acceptable margin of error. This would have the additional benefit of allowing us to start up more slaves than we should need and produce final results before the slowest slaves have terminated. The system represents a 'real-world' application of cluster computing, enabling the user to obtain apparently the performance of a supercomputer, by using the spare cycles on other workstations, without the prohibitive costs associated with supercomputing applications.

References

[1] V. Sunderam, PVM: A Framework for Parallel Distributed Computing, Concurrency: Practice and Experience, 2, (1990), 315-339.

[2] S. A. Williams, P. C. H. Mitchell and G. E. Fagg, A Cluster Computing Implementation of a Monte Carlo Simulation of a Particle Growth Mechanism, Parallel Algorithms and Applications, Vol 4, (1994), pp. 61-66.



Publication date: 1995